3 research outputs found

    Social Visual Behavior Analytics for Autism Therapy of Children Based on Automated Mutual Gaze Detection

    Social visual behavior, as a type of non-verbal communication, plays a central role in studying social cognitive processes in interactive and complex settings of autism therapy interventions. However, social visual behavior analytics for children with autism is challenging because collecting and evaluating gaze data manually demands substantial time and effort from human coders. In this paper, we introduce a social visual behavior analytics approach that quantifies the mutual gaze performance of children receiving play-based autism interventions using an automated mutual gaze detection framework. Our analysis is based on a video dataset that captures social interactions between children with autism and their therapy trainers (N=28 observations, 84 video clips, 21 hours of footage). We evaluated the effectiveness of our framework by comparing the mutual gaze ratio derived from the detection framework with human-coded ratio values. We analyzed mutual gaze frequency and duration across different therapy settings, activities, and sessions, and created mutual gaze-related measures for social visual behavior score prediction using multiple machine learning-based regression models. The results show that our method provides mutual gaze measures that reliably reproduce (or even replace) the human coders' hand-coded social gaze measures, and that it effectively evaluates and predicts ASD children's social visual performance during the intervention. Our findings have implications for social interaction analysis in small-group behavior assessments in numerous co-located settings in (special) education and in the workplace.
    Comment: Accepted to the IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE) 202
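    As a rough illustration of the evaluation described above, the sketch below computes a per-clip mutual gaze ratio from binary per-frame detections and checks its agreement with human-coded ratios via Pearson correlation. This is a minimal sketch, not code from the paper; the function name, frame labels, and human-coded values are hypothetical placeholders.

```python
import numpy as np

def mutual_gaze_ratio(frame_labels):
    """Fraction of frames in a clip flagged as mutual gaze (1 = mutual gaze, 0 = not)."""
    labels = np.asarray(frame_labels, dtype=float)
    return labels.mean() if labels.size else 0.0

# Hypothetical per-clip detector outputs, reduced to one ratio per clip.
auto_ratios = np.array([
    mutual_gaze_ratio([1, 1, 0, 0]),   # clip 1: 0.50
    mutual_gaze_ratio([0, 1, 0, 0]),   # clip 2: 0.25
    mutual_gaze_ratio([1, 1, 1, 0]),   # clip 3: 0.75
])
# Hypothetical human-coded ratios for the same clips.
human_ratios = np.array([0.45, 0.30, 0.70])

# Agreement between automated and hand-coded measures (Pearson r).
r = np.corrcoef(auto_ratios, human_ratios)[0, 1]
print(f"Pearson r between automated and human-coded ratios: {r:.2f}")
```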

    Immersive Virtual Reality and Robotics for Upper Extremity Rehabilitation

    Stroke patients often experience upper limb impairments that restrict their mobility and daily activities. Physical therapy (PT) is the most effective method to improve impairments, but low patient adherence and participation in PT exercises pose significant challenges. To overcome these barriers, a combination of virtual reality (VR) and robotics in PT is promising. However, few systems effectively integrate VR with robotics, especially for upper limb rehabilitation. Additionally, traditional VR rehabilitation primarily focuses on hand movements rather than joint movements of the limb. This work introduces a new virtual rehabilitation solution that combines VR with KinArm robotics and a wearable elbow sensor to measure elbow joint movements. The framework also enhances the capabilities of a traditional robotic device (KinArm) used for motor dysfunction assessment and rehabilitation. A preliminary study with non-clinical participants (n = 16) was conducted to evaluate the effectiveness and usability of the proposed VR framework. We used a two-way repeated measures experimental design where participants performed two tasks (Circle and Diamond) with two conditions (VR and VR KinArm). We found no main effect of the conditions for task completion time. However, there were significant differences in both the normalized number of mistakes and recorded elbow joint angles (captured as resistance change values from the wearable sensor) between the Circle and Diamond tasks. Additionally, we report the system usability, task load, and presence in the proposed VR framework. This system demonstrates the potential advantages of an immersive, multi-sensory approach and provides future avenues for research in developing more cost-effective, tailored, and personalized upper limb solutions for home therapy applications.Comment: Submitted to International Journal of Human-Computer Interactio
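    As a sketch of how the wearable elbow sensor's resistance readings might be converted to joint angles, the snippet below assumes an approximately linear flex-sensor response between two calibration poses. The paper does not specify the sensor model or mapping; the function name, calibration resistances, and angle range here are illustrative assumptions only.

```python
import numpy as np

def elbow_angle_from_resistance(r_ohms, r_ext, r_flex,
                                angle_ext=0.0, angle_flex=140.0):
    """Map a flex-sensor resistance reading to an elbow angle in degrees,
    assuming a roughly linear response between two calibration points:
    full extension (r_ext -> angle_ext) and full flexion (r_flex -> angle_flex)."""
    t = (np.asarray(r_ohms, dtype=float) - r_ext) / (r_flex - r_ext)
    return angle_ext + np.clip(t, 0.0, 1.0) * (angle_flex - angle_ext)

# Hypothetical calibration: 25 kOhm at full extension, 70 kOhm at full flexion.
readings = np.array([25_000, 40_000, 55_000, 70_000])
print(elbow_angle_from_resistance(readings, r_ext=25_000, r_flex=70_000))
# -> [  0.          46.66666667  93.33333333 140.        ]
```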

    Toward interprofessional team training for surgeons and anesthesiologists using virtual reality

    Purpose: In this work, a virtual environment for interprofessional team training in laparoscopic surgery is proposed. Our objective is to provide a tool to train and improve intraoperative communication between anesthesiologists and surgeons during laparoscopic procedures.
    Methods: An anesthesia simulation software and a laparoscopic simulation software are combined within a multi-user virtual reality (VR) environment. Furthermore, two medical training scenarios for communication training between anesthesiologists and surgeons are proposed and evaluated. Testing was conducted and social presence was measured. In addition, clinical feedback from experts was collected by following a think-aloud protocol and through structured interviews.
    Results: Our prototype is assessed as a reasonable basis for training and extensive clinical evaluation. Furthermore, testing revealed a high degree of exhilaration and social presence among the involved physicians. The interviews and the think-aloud protocol with the experts in anesthesia and surgery yielded valuable insights, showing the feasibility of team training in VR, the usefulness of the system for medical training, and its current limitations.
    Conclusion: The proposed VR prototype provides a new basis for interprofessional team training in surgery. It enables training of problem-based communication during surgery and may open new directions for operating room training.